
    INTELLIGENT VISION-BASED NAVIGATION SYSTEM

    This thesis presents a complete vision-based navigation system that can plan and follow an obstacle-avoiding path to a desired destination on the basis of an internal map updated with information gathered from its visual sensor. For vision-based self-localization, the system uses new floor-edges-specific filters for detecting floor edges and their pose, a new algorithm for determining the orientation of the robot, and a new procedure for selecting the initial positions in the self-localization procedure. Self-localization is based on matching visually detected features with those stored in a prior map. For planning, the system demonstrates for the first time a real-world application of the neural-resistive grid method to robot navigation. The neural-resistive grid is modified with a new connectivity scheme that allows the representation of the collision-free space of a robot with finite dimensions via divergent connections between the spatial memory layer and the neuro-resistive grid layer. A new control system is proposed. It uses a Smith Predictor architecture that has been modified for navigation applications and for the intermittent, delayed feedback typical of artificial vision. A receding horizon control strategy is implemented using Normalised Radial Basis Function nets as path encoders, to ensure continuous motion during the delay between measurements. The system is tested in a simplified environment where an obstacle placed anywhere is detected visually and integrated into the path planning process. The results show the validity of the control concept and the crucial importance of a robust vision-based self-localization process.
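
    The neuro-resistive grid planner referred to above is closely related to harmonic potential field planning: free-space cells form a resistive network, the goal node is clamped to a low potential, obstacle nodes to a high one, and the relaxed potential is then followed downhill to obtain a collision-free path. The Python sketch below illustrates only that basic idea and is not the thesis implementation; the function and variable names are illustrative, and the divergent-connection scheme that accounts for the robot's finite dimensions (roughly equivalent to inflating obstacles by the robot's footprint) is omitted.

        import numpy as np

        def plan_resistive_grid(occupancy, start, goal, iters=5000):
            """Toy resistive-grid (harmonic potential) planner on a 2D occupancy grid.

            occupancy: 2D bool array, True = obstacle.
            start, goal: (row, col) cells in free space.
            Returns a list of cells leading from start towards goal.
            """
            pot = np.ones(occupancy.shape)   # obstacles and unexplored space start high
            pot[goal] = 0.0                  # goal clamped to the low potential
            free = ~occupancy

            for _ in range(iters):
                # Jacobi relaxation: every free node takes the mean of its four
                # neighbours, converging towards a solution of Laplace's equation.
                padded = np.pad(pot, 1, mode="edge")
                avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                       padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
                pot = np.where(free, avg, pot)
                pot[goal] = 0.0              # re-clamp the boundary condition

            # Steepest descent on the relaxed potential yields the path.
            path, cell = [start], start
            for _ in range(occupancy.size):
                if cell == goal:
                    break
                r, c = cell
                neighbours = [(r + dr, c + dc)
                              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                              if 0 <= r + dr < occupancy.shape[0]
                              and 0 <= c + dc < occupancy.shape[1]]
                cell = min(neighbours, key=lambda rc: pot[rc])
                path.append(cell)
            return path

        # Example: a 20 x 20 grid with a wall that leaves gaps above and below.
        grid = np.zeros((20, 20), dtype=bool)
        grid[5:15, 10] = True
        route = plan_resistive_grid(grid, start=(10, 2), goal=(10, 18))

    Because a harmonic potential has no spurious local minima inside the free space, simple descent on the relaxed grid reaches the goal whenever a path exists, given enough relaxation iterations; this is what makes resistive-grid methods attractive for the planning layer described above.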

    Integrating Constrained Experiments in Long-term Human-Robot Interaction using Task- and Scenario-based Prototyping

    © 2015 The Author(s). Published with license by Taylor & Francis. © Dag Sverre Syrdal, Kerstin Dautenhahn, Kheng Lee Koay, and Wan Ching Ho. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/).
    In order to investigate how the use of robots may impact everyday tasks, 12 participants interacted with a University of Hertfordshire Sunflower robot over a period of 8 weeks in the university's Robot House. Participants performed two constrained tasks, one physical and one cognitive, four times over this period. Participant responses were recorded using a variety of measures, including the System Usability Scale and the NASA Task Load Index. The use of the robot affected participants' experienced workload differently in the two tasks, and this effect changed over time. In the physical task, there was evidence of adaptation to the robot's behaviour. For the cognitive task, the use of the robot was experienced as more frustrating in the later weeks.

    Humans' Perception of a Robot Moving Using a Slow in and Slow Out Velocity Profile

    © 2019 IEEE. All rights reserved.
    Humans need to understand and trust the robots they are working with. We hypothesize that how a robot moves can impact people's perception of it and their trust in it. We present a methodology for a study to explore people's perception of a robot that uses the animation principle of slow in, slow out to change its velocity profile, compared with a robot moving using a linear velocity profile. Study participants will interact with the robot within a home context to complete a task while the robot moves around the house. The participants' perceptions of the robot will be recorded using the Godspeed Questionnaire. A pilot study shows that it is possible to notice the difference between the linear and the slow in, slow out velocity profiles, so the full experiment planned with participants will allow us to compare their perceptions based on the two observable behaviors.
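
    The paper's implementation is not reproduced here, but a minimal sketch can make the contrast between the two velocity profiles concrete. Assuming a move of fixed distance and duration, a linear profile keeps speed constant, while a slow in, slow out profile can be derived from a smoothstep position curve, so the speed ramps up from zero, peaks mid-way and ramps back down again; the names and numbers below are illustrative.

        def linear_profile(distance, duration, t):
            """Constant speed: velocity is distance/duration for the whole move."""
            return distance / duration if 0.0 <= t <= duration else 0.0

        def slow_in_slow_out_profile(distance, duration, t):
            """Velocity of a smoothstep (ease-in/ease-out) position profile.

            Position follows s(t) = distance * (3u^2 - 2u^3) with u = t/duration,
            so velocity is v(t) = distance * 6u(1 - u) / duration: zero at both
            ends and peaking mid-way, i.e. the slow in, slow out principle.
            """
            if not 0.0 <= t <= duration:
                return 0.0
            u = t / duration
            return distance * 6.0 * u * (1.0 - u) / duration

        # Sample both profiles every 0.1 s for a 2 m move lasting 5 s.
        ts = [0.1 * i for i in range(51)]
        v_linear = [linear_profile(2.0, 5.0, t) for t in ts]
        v_sio = [slow_in_slow_out_profile(2.0, 5.0, t) for t in ts]

    Both commands cover the same distance in the same time; only the shape of the velocity curve differs, which is exactly the observable difference that participants are asked to judge.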

    A Narrative Approach to Human-Robot Interaction Prototyping for Companion Robots

    © 2020 Kheng Lee Koay et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
    This paper presents a proof-of-concept prototype study for domestic home robot companions, using a narrative-based methodology built on the principles of immersive engagement and fictional enquiry, creating scenarios which are inter-connected through a coherent narrative arc to encourage participant immersion within a realistic setting. The aim was to ground human interactions with this technology in a coherent, meaningful experience. Nine participants interacted with a robotic agent in a smart home environment twice a week over a month, with each interaction framed within a greater narrative arc. Participant responses, both to the scenarios and to the robotic agents used within them, are discussed, suggesting that the prototyping methodology was successful in conveying a meaningful interaction experience.

    The impact of peoples' personal dispositions and personalities on their trust of robots in an emergency scenario

    Humans should be able to trust that they can safely interact with their home companion robot. However, robots can exhibit occasional mechanical, programming or functional errors. We hypothesise that the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. First, we investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. Second, we used an interactive storyboard to evaluate participants' degree of trust in the robot after it performed tasks either correctly, or with 'small' or 'big' errors. Finally, we analysed the correlation between participants' responses regarding their personality, predisposition to trust other humans, their perceptions of robots, and their interaction with the robot. We conclude that there is a correlation between the magnitude of an error performed by a robot and the corresponding loss of trust by the human towards the robot. Moreover, we observed that some traits of participants' personalities (conscientiousness and agreeableness) and their disposition to trust other humans (benevolence) significantly increased their tendency to trust a robot more during an emergency scenario.

    In good company? Perception of movement synchrony of a non-anthropomorphic robot

    Copyright © 2015 Lehmann et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
    Recent technological developments like cheap sensors and the decreasing cost of computational power have brought the possibility of robotic home companions within reach. In order to be accepted, it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot's likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot®3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. However, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants' perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot.

    How does peoples’ perception of control depend on the criticality of a task performed by a robot

    © 2019 Adeline Chanseau et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 Public License.
    Robot companions are starting to become more common and people are becoming more familiar with devices such as Google Home, Alexa or Pepper, so one must wonder what the optimum way is for people to control their devices. This paper presents an investigation into how much direct control people want to have over their robot companion and how dependent this is on the criticality of the tasks the robot performs. A live experiment was conducted in the University of Hertfordshire Robot House, with a robot companion performing four different types of tasks. The four tasks were: booking a doctor's appointment, helping the user to build a Lego character, doing a dance with the user, and carrying biscuits for the user. The selection of these tasks was based on our previous research to define tasks which were relatively high and low in criticality. The main goal of the study was to find what level of direct control over the robot participants wanted, and whether this depended on the criticality of the task performed by the robot. Fifty people took part in the study, and each experienced every task in a random order. Overall, it was found that participants' perception of control was higher when the robot was performing a task in a semi-autonomous mode. However, for the task "carrying biscuits", although participants perceived themselves to be more in control with the robot performing the task in a semi-autonomous mode, they actually preferred to have the robot perform the task autonomously (where they felt less in control). The results also show that, for the task "booking a doctor's appointment", considered to be the most critical of the four tasks, participants did not prefer that the robot choose the date of the appointment, as they felt infantilised.

    Differences of Human Perceptions of a Robot Moving using Linear or Slow in, Slow out Velocity Profiles When Performing a Cleaning Task

    We investigated how a robot moving with different velocity profiles affects a person's perception of it when working together on a task. The two profiles are the standard linear profile and a profile based on the animation principle of slow in, slow out. The investigation was accomplished by running an experiment in a home context where people and the robot cooperated on a clean-up task. We used the Godspeed series of questionnaires to gather people's perceptions of the robot. Average scores for each series appear not to be different enough to reject the null hypotheses, but looking at the component items provides paths to future areas of research. We also discuss the scenario for the experiment and how it may be used for future research into using animation techniques for moving robots and improving the legibility of a robot's locomotion.

    Prototyping Realistic Long-Term Human-Robot Interaction for the Study of Agent Migration

    Kheng Lee Koay, Dag Sverre Syrdal and Kerstin Dautenhahn, 'Prototyping Realistic Long-Term Human-Robot Interaction for the Study of Agent Migration', paper presented at the IEEE International Symposium, Columbia University, New York City, New York, USA, 26-31 August 2016.
    This paper examines participants' experiences of interacting with a robotic companion (agent) that has the ability to move its "mind" between different robotic embodiments to take advantage of the features and functionalities associated with the different embodiments, in a process called agent migration. In particular, we focus on identifying factors that can help the companion retain its identity in different embodiments. This includes examining the clarity of the migration behaviour and how this behaviour may contribute to identity retention. Nine participants took part in a long-term study and interacted with the robotic companion in the smart house twice weekly over a period of 5 weeks. We used the Narrative-based Integrated Episodic Scenario (NIES) framework to design long-term interaction scenarios that provided habituation and intervention phases while conveying the impression of continuous long-term interaction. The results show that NIES allows us to explore complex intervention scenarios and obtain a sense of continuity of context across the long-term study. The results also suggest that as participants became habituated to the companion, they found the realisation of migration signalling clearer and felt more certain of the identity of the companion in later sessions, and that the most important factor for this was the agent's continuation of tasks across embodiments. This paper is both empirical and methodological in nature.